Transfer Learning on Heterogeneous Feature Spaces for Treatment Effects Estimation

Neural Information Processing Systems

Consider the problem of improving the estimation of conditional average treatment effects (CATE) for a target domain of interest by leveraging related information from a source domain with a different feature space. This heterogeneous transfer learning problem for CATE estimation is ubiquitous in areas such as healthcare where we may wish to evaluate the effectiveness of a treatment for a new patient population for which different clinical covariates and limited data are available. In this paper, we address this problem by introducing several building blocks that use representation learning to handle the heterogeneous feature spaces and a flexible multi-task architecture with shared and private layers to transfer information between potential outcome functions across domains. Then, we show how these building blocks can be used to recover transfer learning equivalents of the standard CATE learners. On a new semi-synthetic data simulation benchmark for heterogeneous transfer learning, we not only demonstrate performance improvements of our heterogeneous transfer causal effect learners across datasets, but also provide insights into the differences between these learners from a transfer perspective.
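The multi-task architecture the abstract describes can be sketched with a minimal forward pass: domain-specific encoders map each heterogeneous feature space into a common representation, a shared layer transfers information across domains, and private heads produce per-domain potential outcomes. This is an illustrative sketch with made-up dimensions and random weights, not the paper's implementation; all names (`W_enc`, `potential_outcomes`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d_source, d_target, d_rep = 10, 7, 8   # heterogeneous input dims, shared representation dim

# Domain-specific encoders map each feature space into a common representation.
W_enc = {"source": rng.normal(size=(d_source, d_rep)),
         "target": rng.normal(size=(d_target, d_rep))}
# A shared layer carries transferable structure; private heads stay per-domain.
W_shared = rng.normal(size=(d_rep, d_rep))
W_private = {"source": rng.normal(size=(d_rep, 2)),   # two potential-outcome heads
             "target": rng.normal(size=(d_rep, 2))}

def relu(x):
    return np.maximum(x, 0.0)

def potential_outcomes(x, domain):
    """Forward pass: domain encoder -> shared layer -> private outcome heads."""
    h = relu(x @ W_enc[domain])       # common representation
    h = relu(h @ W_shared)            # shared (transferred) layer
    return h @ W_private[domain]      # columns: [mu_0(x), mu_1(x)]

x_t = rng.normal(size=(5, d_target))  # 5 target-domain units
mu = potential_outcomes(x_t, "target")
cate = mu[:, 1] - mu[:, 0]            # plug-in CATE estimate per unit
```

In a trained version of this sketch, the shared layer would be fit on the (larger) source dataset jointly with the target, which is what lets the target domain benefit despite its different covariates.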






Handling Learnwares from Heterogeneous Feature Spaces with Explicit Label Exploitation

Neural Information Processing Systems

The learnware paradigm aims to help users leverage numerous existing high-performing models instead of starting from scratch, where a learnware consists of a well-trained model and a specification describing its capability. Numerous learnwares are accommodated by a learnware dock system. When users solve tasks with the system, models that fully match the task's feature space are often rare or even unavailable. However, models with heterogeneous feature spaces can still be helpful. This paper finds that label information, particularly model outputs, is helpful yet has previously been under-exploited when accommodating heterogeneous learnwares. We extend the specification to better leverage model pseudo-labels and subsequently enrich the unified embedding space for better specification evolution.
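One way to picture "explicit label exploitation" is a specification that records a model's pseudo-labels on a shared anchor set, so that a user task can be matched by pseudo-label agreement rather than by feature distribution alone. The following is a toy sketch under that assumption; the anchor set, `make_specification`, and `match_score` are all hypothetical names, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical anchor set living in a unified embedding space.
anchors = rng.normal(size=(50, 4))

def make_specification(model, project):
    """Extend a distribution-style specification with model pseudo-labels.

    `project` maps the anchors into the learnware's own (possibly
    heterogeneous) feature view; the model's outputs on the anchors are
    stored as explicit label information.
    """
    z = project(anchors)
    return {"embedding": z.mean(axis=0),   # crude distribution summary
            "pseudo_labels": model(z)}     # explicit label information

def match_score(user_labels, spec):
    """Score a learnware by pseudo-label agreement on the anchors."""
    return float(np.mean(user_labels == spec["pseudo_labels"]))

model_a = lambda z: (z[:, 0] > 0).astype(int)   # toy learnware models
model_b = lambda z: (z[:, 1] > 0).astype(int)
identity = lambda a: a                          # homogeneous projection for the sketch

spec_a = make_specification(model_a, identity)
spec_b = make_specification(model_b, identity)

user_labels = (anchors[:, 0] > 0).astype(int)   # user's labels on the anchors
scores = {name: match_score(user_labels, s)
          for name, s in [("a", spec_a), ("b", spec_b)]}
```

Here `model_a` agrees with the user's labeling rule, so its specification scores higher; in a real system the projection would bridge genuinely different feature spaces.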



Transfer Learning on Heterogeneous Feature Spaces for Treatment Effects Estimation

Bica, Ioana, van der Schaar, Mihaela

arXiv.org Artificial Intelligence



Heterogeneous Transfer Learning: An Unsupervised Approach

Liu, Feng, Zhang, Guanquan, Lu, Jie

arXiv.org Machine Learning

Transfer learning leverages the knowledge in one domain, the source domain, to improve learning efficiency in another domain, the target domain. Existing transfer learning research is relatively mature, but only for situations where the feature spaces of the domains are homogeneous and the target domain contains at least a few labeled instances. However, transfer learning has not been well-studied in heterogeneous settings with an unlabeled target domain. To contribute to the research in this emerging field, this paper presents: (1) an unsupervised knowledge transfer theorem that prevents negative transfer; and (2) a principal angle-based metric to measure the distance between two pairs of domains. The metric shows the extent to which homogeneous representations have preserved the information in the original source and target domains. The unsupervised knowledge transfer theorem sets out the transfer conditions necessary to prevent negative transfer. Linear monotonic maps meet the transfer conditions of the theorem and, hence, are used to construct homogeneous representations of the heterogeneous domains, which in principle prevents negative transfer. The metric and the theorem have been implemented in an innovative transfer model, called a Grassmann-LMM-geodesic flow kernel (GLG), that is specifically designed for knowledge transfer across heterogeneous domains. The GLG model learns homogeneous representations of heterogeneous domains by minimizing the proposed metric. Knowledge is transferred through these learned representations via a geodesic flow kernel. Notably, the theorem presented in this paper provides the sufficient transfer conditions needed to guarantee that knowledge is transferred correctly from a source domain to an unlabeled target domain.
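The principal angles underlying such a metric are a standard linear-algebra construction: orthonormalize a basis for each subspace, and the singular values of the product of the orthonormal bases are the cosines of the principal angles. A minimal sketch (this illustrates the general computation, not the GLG model itself):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spaces of A and B.

    QR-orthonormalize each basis; the singular values of Qa.T @ Qb
    are the cosines of the principal angles (clipped for numerical safety).
    """
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# Identical subspaces -> all principal angles are 0.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # span{e1, e2} in R^3
angles = principal_angles(A, A)

# An orthogonal subspace -> angle pi/2.
B = np.array([[0.0], [0.0], [1.0]])                   # span{e3}
theta = principal_angles(A, B)
```

Small angles indicate that the learned homogeneous representations sit close to the original domain subspaces, which is exactly the information-preservation property the abstract's metric is meant to capture.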